brain data
Deep ADMM-Net for Compressive Sensing MRI
Yang, Yan, Sun, Jian, Li, Huibin, Xu, Zongben
Compressive Sensing (CS) is an effective approach for fast Magnetic Resonance Imaging (MRI). It aims at reconstructing MR images from a small amount of under-sampled data in k-space, thereby accelerating data acquisition in MRI. To improve the reconstruction accuracy and computational speed of current MRI systems, in this paper we propose a novel deep architecture, dubbed ADMM-Net. ADMM-Net is defined over a data flow graph derived from the iterative procedure of the Alternating Direction Method of Multipliers (ADMM) algorithm for optimizing a CS-based MRI model. In the training phase, all parameters of the net, e.g., image transforms and shrinkage functions, are discriminatively trained end-to-end using the L-BFGS algorithm. In the testing phase, it has a computational overhead similar to ADMM but uses optimized parameters learned from the training data for the CS-based reconstruction task. Experiments on MR image reconstruction under different sampling ratios in k-space demonstrate that it significantly improves on the baseline ADMM algorithm, achieving high reconstruction accuracy with fast computation.
- Asia > China > Shaanxi Province > Xi'an (0.05)
- Europe > Spain > Catalonia > Barcelona Province > Barcelona (0.04)
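The ADMM iterations that ADMM-Net unrolls into network layers can be illustrated with a plain ADMM solver for an l1-regularized least-squares problem. This is a minimal sketch of the generic algorithm, not the paper's CS-MRI model: ADMM-Net replaces the fixed transform and the soft-thresholding (shrinkage) step below with learned, per-layer counterparts.

```python
import numpy as np

def soft_threshold(v, t):
    # Shrinkage operator; ADMM-Net learns this function per layer.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def admm_lasso(A, b, lam=0.1, rho=1.0, n_iter=100):
    """Plain ADMM for min_x 0.5*||Ax - b||^2 + lam*||x||_1.
    Each loop iteration corresponds to one 'stage' of an unrolled net."""
    m, n = A.shape
    x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)
    AtA, Atb = A.T @ A, A.T @ b
    L = np.linalg.cholesky(AtA + rho * np.eye(n))  # factor once, reuse
    for _ in range(n_iter):
        # x-update: data-fidelity quadratic subproblem
        rhs = Atb + rho * (z - u)
        x = np.linalg.solve(L.T, np.linalg.solve(L, rhs))
        z = soft_threshold(x + u, lam / rho)  # z-update: shrinkage
        u = u + x - z                         # dual (multiplier) update
    return z
```

In the unrolled network, `n_iter` becomes the layer count and `rho`, `lam`, and the shrinkage curve become trainable parameters fitted end-to-end.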
Predicting Brain Morphogenesis via Physics-Transfer Learning
Zhao, Yingjie, Song, Yicheng, Xu, Fan, Xu, Zhiping
Brain morphology is shaped by genetic and mechanical factors and is linked to biological development and diseases. Its fractal-like features, regional anisotropy, and complex curvature distributions hinder quantitative insight in medical inspection. Recognizing that the underlying elastic instability and bifurcation share the same physics as those of simple geometries such as spheres and ellipses, we developed a physics-transfer learning framework to address the geometrical complexity. To overcome the challenge of data scarcity, we constructed a digital library of high-fidelity continuum mechanics modeling that both describes and predicts the developmental processes of brain growth and disease. The physics of nonlinear elasticity from simple geometries is embedded into a neural network and applied to brain models. This physics-transfer approach demonstrates remarkable performance in feature characterization and morphogenesis prediction, highlighting the pivotal role of localized deformation in dominating over the background geometry. The data-driven framework also provides a library of reduced-dimensional evolutionary representations that capture the essential physics of the highly folded cerebral cortex. Validation through medical images and domain expertise underscores the deployment of digital-twin technology in comprehending the morphological complexity of the brain.
- Asia > China > Shanghai > Shanghai (0.04)
- North America > United States > New Jersey (0.04)
- Europe > Middle East > Cyprus > Nicosia > Nicosia (0.04)
- (2 more...)
- Health & Medicine > Therapeutic Area > Psychiatry/Psychology (1.00)
- Health & Medicine > Therapeutic Area > Neurology (1.00)
- Health & Medicine > Health Care Technology (1.00)
- Health & Medicine > Diagnostic Medicine > Imaging (1.00)
Revealing Neurocognitive and Behavioral Patterns by Unsupervised Manifold Learning from Dynamic Brain Data
Zhou, Zixia, Liu, Junyan, Wu, Wei Emma, Fang, Ruogu, Liu, Sheng, Wei, Qingyue, Yan, Rui, Guo, Yi, Tao, Qian, Wang, Yuanyuan, Islam, Md Tauhidul, Xing, Lei
Dynamic brain data, teeming with biological and functional insights, are becoming increasingly accessible through advanced measurements, providing a gateway to understanding the inner workings of the brain in living subjects. However, the vast size and intricate complexity of the data also pose a daunting challenge in reliably extracting meaningful information across various data sources. This paper introduces a generalizable unsupervised deep manifold learning method for the exploration of neurocognitive and behavioral patterns. Unlike existing methods, which extract patterns directly from the input data, the proposed Brain-dynamic Convolutional-Network-based Embedding (BCNE) seeks to capture brain-state trajectories by deciphering the temporospatial correlations within the data and then applying manifold learning to this correlative representation. The performance of BCNE is showcased through the analysis of several important dynamic brain datasets. The results, both visual and quantitative, reveal a diverse array of intriguing and interpretable patterns. BCNE effectively delineates scene transitions, underscores the involvement of different brain regions in memory and narrative processing, distinguishes various stages of dynamic learning processes, and identifies differences between active and passive behaviors. BCNE provides an effective tool for exploring general neuroscience inquiries or individual-specific patterns.
- North America > United States > Florida > Alachua County > Gainesville (0.14)
- North America > United States > California > Santa Clara County > Stanford (0.04)
- North America > United States > New York > New York County > New York City (0.04)
- (4 more...)
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (1.00)
- Health & Medicine > Therapeutic Area > Neurology (1.00)
- Health & Medicine > Health Care Technology (1.00)
- Health & Medicine > Diagnostic Medicine (1.00)
- Education (0.90)
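The two-stage idea behind BCNE (first build a temporospatial correlative representation, then embed it on a manifold) can be sketched in a few lines. This is a simplified stand-in, not the authors' method: BCNE uses a convolutional network, whereas the sketch below substitutes a plain time-by-time correlation matrix followed by a linear (PCA-style) embedding.

```python
import numpy as np

def correlation_embedding(data, n_components=2):
    """Simplified two-stage sketch of the BCNE idea.
    data: array of shape (n_timepoints, n_channels).
    Stage 1: represent each time point by the correlation of its spatial
    pattern with every other time point. Stage 2: embed that correlative
    representation into a low-dimensional trajectory via SVD/PCA."""
    z = data - data.mean(axis=1, keepdims=True)
    z /= (z.std(axis=1, keepdims=True) + 1e-12)
    C = (z @ z.T) / data.shape[1]          # time-by-time correlation matrix
    C = C - C.mean(axis=0, keepdims=True)  # center before PCA
    U, s, _ = np.linalg.svd(C, full_matrices=False)
    return U[:, :n_components] * s[:n_components]  # brain-state trajectory
```

On data containing distinct brain states, the leading components of such an embedding separate the states, which is the kind of trajectory structure BCNE extracts (with a far richer, learned stage 1).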
Fiduciary AI for the Future of Brain-Technology Interactions
Bhattacharjee, Abhishek, Pilkington, Jack, Farahany, Nita
Brain foundation models represent a new frontier in AI: instead of processing text or images, these models interpret real-time neural signals from EEG, fMRI, and other neurotechnologies. When integrated with brain-computer interfaces (BCIs), they may enable transformative applications, from thought-controlled devices to neuroprosthetics, by interpreting and acting on brain activity in milliseconds. However, these same systems pose unprecedented risks, including the exploitation of subconscious neural signals and the erosion of cognitive liberty. Users cannot easily observe or control how their brain signals are interpreted, creating power asymmetries that are vulnerable to manipulation. This paper proposes embedding fiduciary duties of loyalty, care, and confidentiality directly into BCI-integrated brain foundation models through technical design. Drawing on legal traditions and recent advancements in AI alignment techniques, we outline implementable architectural and governance mechanisms to ensure these systems act in users' best interests. Placing brain foundation models on a fiduciary footing is essential to realizing their potential without compromising self-determination.
- Europe (0.14)
- Asia > Japan (0.04)
- South America > Chile (0.04)
- (2 more...)
- Law > Statutes (1.00)
- Information Technology > Security & Privacy (1.00)
- Health & Medicine > Therapeutic Area > Neurology (1.00)
- (3 more...)
Brain Foundation Models: A Survey on Advancements in Neural Signal Processing and Brain Discovery
Zhou, Xinliang, Liu, Chenyu, Chen, Zhisheng, Wang, Kun, Ding, Yi, Jia, Ziyu, Wen, Qingsong
Brain foundation models (BFMs) have emerged as a transformative paradigm in computational neuroscience, offering a revolutionary framework for processing diverse neural signals across different brain-related tasks. These models leverage large-scale pre-training techniques, allowing them to generalize effectively across multiple scenarios, tasks, and modalities, thus overcoming the traditional limitations faced by conventional artificial intelligence (AI) approaches in understanding complex brain data. By tapping into the power of pretrained models, BFMs provide a means to process neural data in a more unified manner, enabling advanced analysis and discovery in the field of neuroscience. In this survey, we define BFMs for the first time, providing a clear and concise framework for constructing and utilizing these models in various applications. We also examine the key principles and methodologies for developing these models, shedding light on how they transform the landscape of neural signal processing. This survey presents a comprehensive review of the latest advancements in BFMs, covering the most recent methodological innovations, new perspectives on application areas, and open challenges in the field. Notably, we highlight the future directions and key challenges that need to be addressed to fully realize the potential of BFMs. These challenges include improving the quality of brain data, optimizing model architecture for better generalization, increasing training efficiency, and enhancing the interpretability and robustness of BFMs in real-world applications.
- North America > United States (0.14)
- Asia > China > Beijing > Beijing (0.04)
- Asia > Singapore (0.04)
- (3 more...)
- Health & Medicine > Therapeutic Area > Neurology (1.00)
- Health & Medicine > Health Care Technology (1.00)
- Information Technology > Artificial Intelligence > Representation & Reasoning (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (1.00)
- (2 more...)
BrainWavLM: Fine-tuning Speech Representations with Brain Responses to Language
Vattikonda, Nishitha, Vaidya, Aditya R., Antonello, Richard J., Huth, Alexander G.
Speech encoding models use auditory representations to predict how the human brain responds to spoken language stimuli. Most performant encoding models linearly map the hidden states of artificial neural networks to brain data, but this linear restriction may limit their effectiveness. In this work, we use low-rank adaptation (LoRA) to fine-tune a WavLM-based encoding model end-to-end on a brain encoding objective, producing a model we name BrainWavLM. We show that fine-tuning across all of cortex improves average encoding performance with greater stability than without LoRA. This improvement comes at the expense of low-level regions like auditory cortex (AC), but selectively fine-tuning on these areas improves performance in AC, while largely retaining gains made in the rest of cortex. Fine-tuned models generalized across subjects, indicating that they learned robust brain-like representations of the speech stimuli. Finally, by training linear probes, we showed that the brain data strengthened semantic representations in the speech model without any explicit annotations. Our results demonstrate that brain fine-tuning produces best-in-class speech encoding models, and that non-linear methods have the potential to bridge the gap between artificial and biological representations of semantics.
- North America > United States > Texas > Travis County > Austin (0.04)
- Asia > Middle East > Qatar > Ad-Dawhah > Doha (0.04)
- North America > United States > Hawaii (0.04)
- Asia > Thailand > Bangkok > Bangkok (0.04)
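The low-rank adaptation (LoRA) mechanism used to fine-tune the WavLM encoding model above can be sketched in isolation. This is a minimal forward-pass illustration of LoRA itself (numpy, no training loop), not the BrainWavLM model: a frozen weight matrix W is augmented with a trainable low-rank update B @ A, so fine-tuning touches only r*(d_in + d_out) parameters instead of d_in*d_out.

```python
import numpy as np

class LoRALinear:
    """Minimal LoRA sketch: frozen weight plus trainable low-rank update."""
    def __init__(self, W, r=4, alpha=8, rng=None):
        rng = rng if rng is not None else np.random.default_rng(0)
        d_out, d_in = W.shape
        self.W = W                                       # frozen pretrained weight
        self.A = rng.standard_normal((r, d_in)) * 0.01   # trainable down-projection
        self.B = np.zeros((d_out, r))                    # zero init: no change at start
        self.scale = alpha / r                           # standard LoRA scaling

    def __call__(self, x):
        # Frozen path plus scaled low-rank correction.
        return x @ self.W.T + self.scale * (x @ self.A.T) @ self.B.T
```

Because B starts at zero, the adapted layer initially reproduces the pretrained model exactly; gradient updates to A and B then steer it toward the new objective (here, predicting brain responses) with greater stability than full fine-tuning.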
Review for NeurIPS paper: Modeling Task Effects on Meaning Representation in the Brain via Zero-Shot MEG Prediction
Summary and Contributions: This paper presents a re-analysis of the MEG experiment of Sudre et al. (2012), in which participants responded to a question about the meaning of an object concept word. In the original Sudre et al. analysis, the focus was on testing the predictive power of different perceptual and semantic feature models of the concept word for the MEG data. In the current study, the focus is on the role of the task question that precedes the concept word, and in particular on whether and how the semantics of the task question modulate the subsequent processing and neural activity time-locked to the stimulus word. This is an interesting neurocognitive question: it sheds light on how lexical-semantic representation and access can be modulated by the preceding context, and on how the timing of task-independent processing of the target concept word relates to the timing of processing that integrates that conceptual knowledge with the task requirements in order to respond. To analyze the data, the authors construct vector-based semantic models of both the concept words and the task questions, using human responses from separate questions and concepts in which participants rated the truth of the task questions for the concepts.
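Zero-shot evaluation in this MEG decoding literature is commonly done with a "2 vs 2" test. The sketch below is an illustration of that general protocol under simplifying assumptions (ridge regression from semantic features to recordings), not the paper's exact pipeline: for each pair of held-out concepts, a linear map is fit on the remaining concepts, and the pairing of predictions to true recordings is checked against the swapped pairing.

```python
import numpy as np

def two_vs_two_accuracy(sem, meg, lam=1.0):
    """Leave-two-out '2 vs 2' zero-shot test.
    sem: (n_concepts, n_features) semantic vectors.
    meg: (n_concepts, n_sensors) recorded response patterns."""
    n = sem.shape[0]
    correct, total = 0, 0
    for i in range(n):
        for j in range(i + 1, n):
            train = [k for k in range(n) if k not in (i, j)]
            X, Y = sem[train], meg[train]
            # Ridge map from semantic features to MEG patterns.
            W = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ Y)
            pi, pj = sem[i] @ W, sem[j] @ W
            good = np.linalg.norm(pi - meg[i]) + np.linalg.norm(pj - meg[j])
            bad = np.linalg.norm(pi - meg[j]) + np.linalg.norm(pj - meg[i])
            correct += good < bad  # correct pairing beats swapped pairing
            total += 1
    return correct / total
```

Chance level is 0.5; above-chance accuracy on held-out concepts is what licenses the "zero-shot prediction" claim in the title.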
Aligning Brain Activity with Advanced Transformer Models: Exploring the Role of Punctuation in Semantic Processing
Lamprou, Zenon, Pollick, Frank, Moshfeghi, Yashar
This research examines the congruence between neural activity and advanced transformer models, emphasizing the semantic significance of punctuation in text understanding. Using an approach originally proposed by Toneva and Wehbe, we evaluate four advanced transformer models, RoBERTa, DistilBERT, ALBERT, and ELECTRA, against neural activity data. Our findings indicate that RoBERTa exhibits the closest alignment with neural activity, surpassing BERT in accuracy. Furthermore, we investigate the impact of punctuation removal on model performance and neural alignment, revealing that BERT's accuracy improves when punctuation is removed. This study contributes to our understanding of how neural networks represent language and of the influence of punctuation on semantic processing in the human brain.
- North America > United States > Minnesota > Hennepin County > Minneapolis (0.14)
- Oceania > Australia > Victoria > Melbourne (0.05)
- North America > United States > New York > New York County > New York City (0.04)
- (7 more...)
Probing the contents of semantic representations from text, behavior, and brain data using the psychNorms metabase
Hussain, Zak, Mata, Rui, Newell, Ben R., Wulff, Dirk U.
Semantic representations are integral to natural language processing, psycholinguistics, and artificial intelligence. Although often derived from internet text, recent years have seen a rise in the popularity of behavior-based (e.g., free associations) and brain-based (e.g., fMRI) representations, which promise improvements in our ability to measure and model human representations. We carry out the first systematic evaluation of the similarities and differences between semantic representations derived from text, behavior, and brain data. Using representational similarity analysis, we show that word vectors derived from behavior and brain data encode information that differs from their text-derived cousins. Furthermore, drawing on our psychNorms metabase, alongside an interpretability method that we call representational content analysis, we find that, in particular, behavior representations capture unique variance on certain affective, agentic, and socio-moral dimensions. We thus establish behavior as an important complement to text for capturing human representations and behavior. These results are broadly relevant to research aimed at learning human-aligned semantic representations, including work on evaluating and aligning large language models.
- Europe > Switzerland > Basel-City > Basel (0.04)
- South America > Colombia > Meta Department > Villavicencio (0.04)
- Oceania > Australia > New South Wales > Sydney (0.04)
- (3 more...)
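The representational similarity analysis used above compares two embedding spaces over the same word list by correlating their pairwise-similarity structure, since the spaces themselves need not share dimensions. A minimal sketch, using cosine similarity and Pearson correlation over the upper triangle (the paper's exact distance and rank-correlation choices may differ):

```python
import numpy as np

def rsa_similarity(emb_a, emb_b):
    """Second-order (representational) similarity between two embedding
    spaces over the same items. Returns a correlation in [-1, 1]."""
    def sim_matrix(E):
        En = E / (np.linalg.norm(E, axis=1, keepdims=True) + 1e-12)
        return En @ En.T                       # cosine similarity matrix
    iu = np.triu_indices(emb_a.shape[0], k=1)  # unique item pairs only
    a, b = sim_matrix(emb_a)[iu], sim_matrix(emb_b)[iu]
    return np.corrcoef(a, b)[0, 1]
```

Because any rotation of an embedding space leaves its cosine-similarity structure unchanged, this measure isolates what the abstract is after: whether behavior- and brain-derived vectors encode the same relational information as their text-derived cousins.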
The Download: Google's AI podcasts, and protecting your brain data
Google's new AI podcasting tool, called Audio Overview, has become a surprise viral hit. The podcasting feature was launched in mid-September as part of NotebookLM, a year-old AI-powered research assistant. NotebookLM, which is powered by Google's Gemini 1.5 model, allows people to upload content such as links, videos, PDFs, and text. They can then ask the system questions about the content, and it offers short summaries. The tool generates a podcast called Deep Dive, which features a male and a female voice discussing whatever you uploaded.
- Health & Medicine (0.41)
- Law (0.38)
- Information Technology > Communications > Mobile (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.78)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (0.60)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (0.60)